Simultaneous perturbation stochastic approximation
Simultaneous perturbation stochastic approximation (SPSA) is an algorithmic method for optimizing systems with multiple unknown parameters. It is a type of stochastic approximation algorithm. As an optimization method, it is well suited to large-scale population models, adaptive modeling, simulation optimization, and atmospheric modeling. Many examples are presented at the SPSA website http://www.jhuapl.edu/SPSA. A comprehensive recent book on the subject is Bhatnagar et al. (2013). An early paper on the subject is Spall (1987), and the foundational paper providing the key theory and justification is Spall (1992).
SPSA is a descent method capable of finding global minima, sharing this property with other methods such as simulated annealing. Its main feature is the gradient approximation, which requires only two measurements of the objective function, regardless of the dimension of the optimization problem. Recall that we want to find the optimal control u^* with loss function J(u):
:u^* = \arg\min_{u} J(u).
Both Finite Differences Stochastic Approximation (FDSA) and SPSA use the same iterative process:
:u_{n+1} = u_n - a_n \hat{g}_n(u_n),
where u_n = ((u_n)_1, (u_n)_2, \ldots, (u_n)_p)^T represents the n^{th} iterate, \hat{g}_n(u_n) is the estimate of the gradient of the objective function g(u) = \frac{\partial}{\partial u} J(u) evaluated at u_n, and \{a_n\} is a positive number sequence converging to 0. If u_n is a ''p''-dimensional vector, the i^{th} component of the symmetric finite difference gradient estimator is:
:FD: (\hat{g}_n(u_n))_i = \frac{J(u_n + c_n e_i) - J(u_n - c_n e_i)}{2 c_n},
''1 ≤ i ≤ p'', where e_i is the unit vector with a 1 in the i^{th} place, and c_n is a small positive number that decreases with ''n''. With this method, 2''p'' evaluations of ''J'' are needed for each \hat{g}_n. Clearly, when ''p'' is large, this estimator loses efficiency.
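To make the 2''p''-measurement cost concrete, the following is a minimal Python sketch of the symmetric finite-difference estimator (the function name fd_gradient and the NumPy dependency are illustrative assumptions, not part of the article):

import numpy as np

def fd_gradient(J, u, c_n):
    """Symmetric finite-difference estimate of the gradient of J at u.

    Requires 2*p evaluations of the (possibly noisy) loss J,
    two per coordinate, as in the FD estimator above.
    """
    u = np.asarray(u, dtype=float)
    p = len(u)
    g_hat = np.zeros(p)
    for i in range(p):
        e_i = np.zeros(p)
        e_i[i] = 1.0  # unit vector with a 1 in the i-th place
        g_hat[i] = (J(u + c_n * e_i) - J(u - c_n * e_i)) / (2.0 * c_n)
    return g_hat
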
Now let \Delta_n be a random perturbation vector. The i^{th} component of the stochastic perturbation gradient estimator is:
:SP: (\hat{g}_n(u_n))_i = \frac{J(u_n + c_n \Delta_n) - J(u_n - c_n \Delta_n)}{2 c_n (\Delta_n)_i}.
Note that FD perturbs only one direction at a time, while the SP estimator disturbs all directions at the same time (the numerator is identical in all ''p'' components). The number of loss-function measurements needed in the SPSA method for each \hat{g}_n is always 2, independent of the dimension ''p''. Thus, SPSA uses ''p'' times fewer function evaluations than FDSA, which makes it much more efficient.
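As a rough illustration of the two-measurement estimator and of the iteration u_{n+1} = u_n - a_n \hat{g}_n(u_n), here is a minimal Python sketch. The ±1 (symmetric Bernoulli) perturbations and the power-law gain sequences are common practical choices, assumed here rather than prescribed by the text above:

import numpy as np

def spsa_gradient(J, u, c_n, rng):
    """Simultaneous perturbation estimate of the gradient of J at u.

    Uses only two evaluations of J, regardless of the dimension p.
    The perturbation Delta_n has independent +/-1 components, which
    satisfy the zero-mean and bounded-inverse-moment conditions of
    the convergence lemma below.
    """
    delta = rng.choice([-1.0, 1.0], size=len(u))     # random perturbation vector Delta_n
    y_plus = J(u + c_n * delta)
    y_minus = J(u - c_n * delta)
    return (y_plus - y_minus) / (2.0 * c_n * delta)  # same numerator in every component

def spsa(J, u0, n_iter=1000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Run u_{n+1} = u_n - a_n * g_hat_n(u_n) with gains a_n, c_n decreasing to 0."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    for n in range(1, n_iter + 1):
        a_n = a / n ** alpha   # positive sequence converging to 0
        c_n = c / n ** gamma
        u = u - a_n * spsa_gradient(J, u, c_n, rng)
    return u
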
Simple experiments with ''p'' = 2 showed that SPSA converges in the same number of iterations as FDSA. The latter follows approximately the steepest-descent direction, behaving like the gradient method. SPSA, on the other hand, with its random search direction, does not follow the gradient path exactly. On average, though, it tracks it closely, because the gradient approximation is an almost unbiased estimator of the gradient, as shown in the convergence lemma below.
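For a toy illustration in the spirit of such a ''p'' = 2 experiment (the quadratic test function and all settings below are assumptions of this sketch, not the experiment reported above), the estimators sketched earlier can be exercised as follows:

import numpy as np

def J_noisy(u, rng=np.random.default_rng(1)):
    # quadratic loss with minimizer (1, -2), plus small measurement noise
    return float(np.sum((u - np.array([1.0, -2.0])) ** 2) + 0.01 * rng.standard_normal())

u_spsa = spsa(J_noisy, u0=[5.0, 5.0], n_iter=2000)  # 2 measurements of J per iteration
print(u_spsa)  # should land near (1, -2); an FDSA run built on fd_gradient would need 2p = 4 per iteration
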
== Convergence lemma ==
Denote by
:b_n = E[\hat{g}_n \mid u_n] - \nabla J(u_n)
the bias in the estimator \hat{g}_n. Assume that \{(\Delta_n)_i\} are all mutually independent with zero mean and bounded second moments, and that E\left(|(\Delta_n)_i|^{-1}\right) is uniformly bounded. Then b_n \to 0 w.p. 1.
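A short Taylor-expansion argument (a sketch, assuming ''J'' is sufficiently smooth, e.g. three times continuously differentiable) indicates where this comes from. Expanding the SP estimator gives
:(\hat{g}_n(u_n))_i = \frac{J(u_n + c_n \Delta_n) - J(u_n - c_n \Delta_n)}{2 c_n (\Delta_n)_i} = g_i(u_n) + \sum_{j \neq i} \frac{(\Delta_n)_j}{(\Delta_n)_i}\, g_j(u_n) + O(c_n^2),
and, taking the expectation conditional on u_n, the cross terms vanish because (\Delta_n)_j and (\Delta_n)_i^{-1} are independent with E[(\Delta_n)_j] = 0, so b_n = O(c_n^2) \to 0.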
